
[codex] Support GPT-5 OpenAI token limits #29

Merged

intertwine merged 1 commit into main from codex/gpt5-openai-token-limit on Apr 11, 2026
Conversation

@intertwine (Owner)

Summary

  • Route GPT-5 and o-series OpenAI Chat Completions requests through max_completion_tokens instead of max_tokens.
  • Keep existing max_tokens behavior for older OpenAI chat models such as gpt-4o-mini.
  • Add focused unit coverage for GPT-5, GPT-5 chat alias, o-series, and legacy chat token limit parameters.
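The parameter routing described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the helper name `token_limit_param` and the exact prefix list are assumptions, though the behavior matches the summary (GPT-5-class and o-series models get `max_completion_tokens`, older chat models keep `max_tokens`).

```python
def token_limit_param(model: str, limit: int) -> dict:
    """Return the token-limit kwarg for an OpenAI Chat Completions call.

    Hypothetical helper illustrating the routing: GPT-5-class and
    o-series models reject `max_tokens` and require
    `max_completion_tokens`; older chat models such as gpt-4o-mini
    still use `max_tokens`.
    """
    name = model.lower()
    # Prefix check is an assumption; the real adapter may match differently.
    if name.startswith(("gpt-5", "o1", "o3", "o4")):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}
```

The returned dict would then be merged into the Chat Completions request kwargs, so only one of the two parameters is ever sent.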

Root Cause

om doctor --validate-key failed when OM_LLM_MODEL=gpt-5.4 because the OpenAI adapter always sent max_tokens. GPT-5-class models reject that parameter and require max_completion_tokens.

Validation

  • uv run pytest tests/test_llm.py
  • uv run pytest
  • uv run ruff check src tests
  • uv run om doctor --validate-key confirmed the OpenAI call succeeds with gpt-5.4. Unrelated QMD/launchd environment checks were later restored to green in the installed OM environment with om install --both --scheduler launchd --provider openai --llm-model gpt-5.4 --non-interactive.

intertwine marked this pull request as ready for review April 11, 2026 16:51
intertwine merged commit 7915e26 into main Apr 11, 2026
5 checks passed